Conditional gradient algorithms with open loop step size rules
Authors
Abstract
Similar resources
Lazifying Conditional Gradient Algorithms
Conditional gradient algorithms (also often called Frank-Wolfe algorithms) are popular because they require only a linear optimization oracle, and they have recently gained significant traction in online learning as well. While simple in principle, in many cases the actual implementation of the linear optimization oracle is costly. We show a general method to lazify various conditi...
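The open-loop rules in the page title are step-size schedules fixed in advance, the classic one being gamma_t = 2/(t+2). Below is a minimal Python sketch of a plain Frank-Wolfe iteration with that rule over the probability simplex; it illustrates the linear optimization oracle the abstract refers to, not the lazified variant, and all names are illustrative:

import numpy as np

def frank_wolfe_simplex(grad, x0, steps=100):
    # Frank-Wolfe with the open-loop step size gamma_t = 2 / (t + 2):
    # no line search and no feedback from the objective value.
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        # Linear optimization oracle over the simplex: the minimizer of
        # <g, v> is the vertex at the smallest gradient coordinate.
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * v
    return x

# Example: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.1, 0.7, 0.2])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - b), np.ones(3) / 3.0)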
A stochastic gradient adaptive filter with gradient adaptive step size
This paper presents an adaptive step-size gradient adaptive filter. The step size of the adaptive filter is changed according to a gradient descent algorithm designed to reduce the squared estimation error during each iteration. An approximate analysis of the performance of the adaptive filter when its inputs are zero mean, white, and Gaussian and the set of optimal coefficients are time varyin...
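A minimal sketch of the idea in this abstract, assuming an LMS-style filter whose step size mu is itself adapted by stochastic gradient descent on the squared error; the exact recursion and safeguards in the paper may differ, and all names here are illustrative:

import numpy as np

def gass_lms(x, d, order=4, rho=1e-4, mu0=0.01):
    # LMS filter whose step size mu is adapted by a gradient descent
    # on the squared estimation error at each iteration.
    w = np.zeros(order)
    mu = mu0
    u_prev = np.zeros(order)
    e_prev = 0.0
    for n in range(order, len(d)):
        u = x[n - order + 1:n + 1][::-1]   # current input regressor
        e = d[n] - w @ u                   # estimation error
        # The gradient of e(n)^2 w.r.t. mu involves the previous error
        # and regressor; clipping mu to [0, 1] is an added safeguard.
        mu = min(max(mu + rho * e * e_prev * (u @ u_prev), 0.0), 1.0)
        w = w + mu * e * u                 # standard LMS coefficient update
        u_prev, e_prev = u, e
    return w

# Example: identify a 4-tap FIR system from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
d = np.convolve(x, [0.5, -0.3, 0.2, 0.1])[:1000] + 0.01 * rng.standard_normal(1000)
w = gass_lms(x, d)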
Conditional gradient algorithms for machine learning
We consider penalized formulations of machine learning problems whose regularization penalty has conic structure. For several important learning problems, state-of-the-art optimization approaches such as proximal gradient algorithms are difficult to apply and computationally expensive, which prevents their use for large-scale learning. We present a conditional gradient algorithm, wi...
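One common instance of this approach is to trade a norm penalty for a norm-ball constraint and run the conditional gradient method on the constrained problem: the linear oracle over, say, an l1 ball returns a single signed coordinate, which can be far cheaper than a proximal step. A hedged sketch under that assumption (not the authors' exact algorithm):

import numpy as np

def fw_l1_ball(grad, tau, dim, steps=200):
    # Conditional gradient over the l1 ball of radius tau. The linear
    # oracle returns the signed vertex -tau * sign(g_i) * e_i for the
    # largest |g_i|, so iterates stay sparse combinations of vertices.
    x = np.zeros(dim)
    for t in range(steps):
        g = grad(x)
        i = np.argmax(np.abs(g))
        v = np.zeros(dim)
        v[i] = -tau * np.sign(g[i])
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * v
    return x

# Example: l1-constrained least squares, min ||Ax - b||^2 s.t. ||x||_1 <= 1.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
x = fw_l1_ball(lambda x: 2.0 * A.T @ (A @ x - b), tau=1.0, dim=10)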
Adaptive Step-Size for Policy Gradient Methods
In the last decade, policy gradient methods have grown significantly in popularity in the reinforcement learning field. In particular, they have been widely employed in motor control and robotic applications, thanks to their ability to cope with continuous state and action domains and with partially observable problems. Policy gradient research has mainly focused on the identification of effe...
A Convergent Incremental Gradient Method with a Constant Step Size
An incremental gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step size. For the case that the gradient is bounded and Lipschitz continuous, we show that the method visits regions in which the gradient is small infinitely often. Under certain unimodality assu...
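A minimal sketch of the method as described here: cycle through the component functions, taking one gradient step per component with a fixed step size. The function names and toy objective are illustrative:

import numpy as np

def incremental_gradient(component_grads, x0, step=1e-2, epochs=200):
    # Minimize f(x) = sum_i f_i(x) using one component-gradient
    # evaluation per iteration and a constant step size.
    x = x0.copy()
    for _ in range(epochs):
        for g in component_grads:   # one gradient evaluation per step
            x = x - step * g(x)
    return x

# Example: f(x) = sum_i (x - a_i)^2 has its minimum at the mean of a.
a = [1.0, 2.0, 6.0]
grads = [lambda x, ai=ai: 2.0 * (x - ai) for ai in a]
x = incremental_gradient(grads, np.array([0.0]))   # approaches 3.0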
Journal
Journal title: Journal of Mathematical Analysis and Applications
Year: 1978
ISSN: 0022-247X
DOI: 10.1016/0022-247x(78)90137-3